This document combines the loop analyses. The code merges Daniela's and Shuying's code; Nick's may be added later.
Below is a list of the sections included here, including summaries of the white matter analyses and completed figures.
Shapiro-Wilk normality tests were conducted to assess violations of normality in the independent and dependent variables before correlating the spatial navigation measures with sex hormones. To control for chronological age while assessing the relationship between sex steroid hormones and navigational strategy, partial Spearman rank correlations were conducted when the Shapiro-Wilk test was statistically significant (p < 0.05); otherwise, partial Pearson correlations were conducted. Based on existing evidence of sex hormones' influence on navigation from the animal literature and strong a priori predictions that estradiol would be positively associated with navigation performance while follicle-stimulating hormone would show an opposing effect, we conducted one-tailed tests for these hormones, controlling for chronological age. Two-tailed tests were used for hormones without a strong a priori hypothesis (progesterone, testosterone). Men and women were analyzed separately: for men, the testosterone correlation was one-tailed in the positive direction, while for women it was two-tailed.
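A minimal sketch of this decision rule (the helper name and the alpha threshold are illustrative, not taken from the analysis code):

```r
# Pick the correlation method from a Shapiro-Wilk normality test:
# Spearman if normality is rejected at alpha, Pearson otherwise.
# (Illustrative helper; not part of the original pipeline.)
choose_cor_method <- function(x, alpha = 0.05) {
  if (shapiro.test(x)$p.value < alpha) "spearman" else "pearson"
}
```

A call such as `cor.test(x, y, method = choose_cor_method(x), alternative = "greater")` would then give the one-tailed version of the chosen test; the partial correlations reported below additionally control for the age covariate.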
**Notes as of 2025** - T1 hippocampal volume was extracted using FreeSurfer recon-all and corrected using the FreeSurfer-derived TIV.
Total hippocampus (from T1-weighted whole-brain scans) and hippocampal subfield volumes were corrected using each participant's total intracranial volume (TIV) to remove size bias in comparisons. In addition to the total hippocampal volume from the T1-weighted scans, another measure of total hippocampal volume from the T2-weighted hippocampal subfield scans was calculated by taking the sum of the CA1, CA2/3, dentate gyrus, and subiculum subfield volumes after adjustment for TIV. These structures make up the hippocampus region based on the anatomical components of the medial temporal lobe system (Squire et al., 2004; Squire & Zola-Morgan, 1991). An average of the left and right grey matter volume (mm3) for the total hippocampus and the individual subfields was used for analysis. For all the statistical tests mentioned, corrections for multiple comparisons were performed using Benjamini-Hochberg-Yekutieli p-value adjustments to control the false discovery rate.

Figure 4.1. Investigating volumetric differences using segmentation of the medial temporal lobe and total hippocampus region. (A) Sample slice of the medial temporal lobe cortex and hippocampus segmented into hippocampal subfields using the Automatic Segmentation of Hippocampal Subfields software. Labeled subfields include CA1 (cornu ammonis 1), CA2/3, DG (dentate gyrus), SUB (subiculum), ERC (entorhinal cortex), PRC (perirhinal cortex), and PHC (parahippocampal cortex). Total hippocampus is computed by aggregating subfields CA1, CA2/3, DG, and SUB; the medial temporal lobe is computed by aggregating all the subfields. (B) Women (n = 74, M = 0.56) tend to have larger T1 total hippocampal volume than men (n = 32, M = 0.48; t(68) = 9.72, p < 0.001). Boxplot endpoints indicate the 25th and 75th percentiles; the black line within each boxplot indicates the median, and the black point indicates the mean. p-values: *** p < 0.001.
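The false-discovery-rate correction described above is available in base R via `p.adjust`, assuming the Benjamini-Yekutieli (`"BY"`) method is the one meant; a small illustration with made-up p-values:

```r
# FDR adjustment of a vector of hypothetical raw p-values.
# "BH" is the plain Benjamini-Hochberg step-up procedure;
# "BY" adds a penalty that makes it valid under arbitrary
# dependence among the tests.
raw_p <- c(0.01, 0.02, 0.03, 0.04)       # made-up values for illustration
adj_p <- p.adjust(raw_p, method = "BY")  # BY-adjusted p-values
```

Significance is then judged on `adj_p` rather than on the raw p-values.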
Reading in our main LOOP CSV and creating large data frames for midlife and young adults.
The columns listed below are from Shuying's original raw data. We will ignore the old T1 columns. The T2 volumes here are already corrected for TIV.
| columns |
|---|
| t1_vbm_tiv |
| t1_vbm_gmv |
| t1_vbm_wmv |
| t1_vbm_csf |
| t1_vol_left_hipp_aal_2d_d1_r |
| t1_vol_right_hipp_aal_2d_d1_r |
| t1_vol_left_hipp_aal_3d_d1_s |
| t1_vol_right_hipp_aal_3d_d1_s |
| t2hipp_vol_avg_ca1 |
| t2hipp_vol_avg_ca23 |
| t2hipp_vol_avg_dg |
| t2hipp_vol_avg_erc |
| t2hipp_vol_avg_phc |
| t2hipp_vol_avg_prc |
| t2hipp_vol_avg_sub |
| t2hipp_vol_left_ca1 |
| t2hipp_vol_left_ca23 |
| t2hipp_vol_left_dg |
| t2hipp_vol_left_erc |
| t2hipp_vol_left_phc |
| t2hipp_vol_left_prc |
| t2hipp_vol_left_sub |
| t2hipp_vol_right_ca1 |
| t2hipp_vol_right_ca23 |
| t2hipp_vol_right_dg |
| t2hipp_vol_right_prc |
| t2hipp_vol_right_sub |
Total N = 43
# Let's create a clean df to work with here and include only the columns we want
midlife_HP_df <-
midlife_raw_df %>% dplyr::select(
"subject_id",
"sex",
"age_spatial_years",
"repo_status",
"loop_pe_rad3_m",
"loop_pe_avg_m",
"loop_de_rad3_degree",
"loop_de_avg_degree",
"loop_ae_rad3_degree",
"loop_ae_avg_degree",
"t1_vbm_tiv",
"t1_vbm_gmv",
"t1_vbm_wmv",
"t1_vbm_csf",
"t1_vol_left_hipp_aal_2d_d1_r",
"t1_vol_right_hipp_aal_2d_d1_r",
"t1_vol_left_hipp_aal_3d_d1_s",
"t1_vol_right_hipp_aal_3d_d1_s",
"t2hipp_vol_avg_ca1",
"t2hipp_vol_avg_ca23",
"t2hipp_vol_avg_dg",
"t2hipp_vol_avg_erc",
"t2hipp_vol_avg_phc",
"t2hipp_vol_avg_prc",
"t2hipp_vol_avg_sub",
"t2hipp_vol_left_ca1" ,
"t2hipp_vol_left_ca23",
"t2hipp_vol_left_dg",
"t2hipp_vol_left_erc",
"t2hipp_vol_left_phc",
"t2hipp_vol_left_prc",
"t2hipp_vol_left_sub" ,
"t2hipp_vol_right_ca1",
"t2hipp_vol_right_ca23",
"t2hipp_vol_right_dg",
"t2hipp_vol_right_prc",
"t2hipp_vol_right_sub",
"Left-Hippocampus",
"Right-Hippocampus",
"eTIV"
  ) %>%
  mutate(avg_t1_hipp = (`Left-Hippocampus` + `Right-Hippocampus`) / 2) %>%
  filter(!is.na(eTIV)) # Remove subjects without a scan; N = 43
midlife_HP_female_df <- midlife_HP_df %>% filter(sex == "Female")
midlife_HP_male_df <- midlife_HP_df %>% filter(sex == "Male")
knitr::kable(normality_midlife_HP) %>% kable_styling(bootstrap_options = c("striped", "hover", "condensed")) %>% scroll_box(width = "800px", height = "300px")
| statistic | pvalue | method | variable |
|---|---|---|---|
| 0.980 | 0.701 | Shapiro-Wilk normality test | midlife_HP_df$loop_pe_rad3_m |
| 0.970 | 0.633 | Shapiro-Wilk normality test | midlife_HP_female_df$loop_pe_rad3_m |
| 0.981 | 0.979 | Shapiro-Wilk normality test | midlife_HP_male_df$loop_pe_rad3_m |
| 0.959 | 0.127 | Shapiro-Wilk normality test | midlife_HP_df$loop_pe_avg_m |
| 0.933 | 0.090 | Shapiro-Wilk normality test | midlife_HP_female_df$loop_pe_avg_m |
| 0.960 | 0.627 | Shapiro-Wilk normality test | midlife_HP_male_df$loop_pe_avg_m |
| 0.939 | 0.036 | Shapiro-Wilk normality test | midlife_HP_df$loop_de_rad3_degree |
| 0.985 | 0.962 | Shapiro-Wilk normality test | midlife_HP_female_df$loop_de_rad3_degree |
| 0.673 | < 0.001 | Shapiro-Wilk normality test | midlife_HP_male_df$loop_de_rad3_degree |
| 0.978 | 0.584 | Shapiro-Wilk normality test | midlife_HP_df$loop_de_avg_degree |
| 0.980 | 0.884 | Shapiro-Wilk normality test | midlife_HP_female_df$loop_de_avg_degree |
| 0.927 | 0.192 | Shapiro-Wilk normality test | midlife_HP_male_df$loop_de_avg_degree |
| 0.969 | 0.359 | Shapiro-Wilk normality test | midlife_HP_df$loop_ae_rad3_degree |
| 0.961 | 0.436 | Shapiro-Wilk normality test | midlife_HP_female_df$loop_ae_rad3_degree |
| 0.966 | 0.822 | Shapiro-Wilk normality test | midlife_HP_male_df$loop_ae_rad3_degree |
| 0.893 | < 0.001 | Shapiro-Wilk normality test | midlife_HP_df$loop_ae_avg_degree |
| 0.874 | 0.004 | Shapiro-Wilk normality test | midlife_HP_female_df$loop_ae_avg_degree |
| 0.923 | 0.163 | Shapiro-Wilk normality test | midlife_HP_male_df$loop_ae_avg_degree |
loop_summarystats <- midlife_HP_df %>%
group_by(sex) %>%
summarize(n_subject = n(),
age_mean = mean(age_spatial_years),
age_Sd = sd(age_spatial_years),
AE_rad3 = mean(loop_ae_rad3_degree,na.rm=TRUE),
AE_avg = mean(loop_ae_avg_degree ,na.rm=TRUE),
PE_rad3 = mean(loop_pe_rad3_m,na.rm=TRUE),
PE_avg = mean(loop_pe_avg_m,na.rm=TRUE),
DT_rad3 = mean(loop_de_rad3_degree,na.rm=TRUE),
DT_avg = mean(loop_de_avg_degree,na.rm=TRUE)) %>% as.data.frame()
knitr::kable(loop_summarystats) %>% kable_styling(bootstrap_options = c("striped", "hover", "condensed")) %>% scroll_box(width = "800px", height = "200px")
| sex | n_subject | age_mean | age_Sd | AE_rad3 | AE_avg | PE_rad3 | PE_avg | DT_rad3 | DT_avg |
|---|---|---|---|---|---|---|---|---|---|
| Female | 26 | 50.23077 | 3.701974 | 69.8513 | 58.99396 | 3.130604 | 1.846265 | 393.4124 | 375.6190 |
| Male | 17 | 50.35294 | 3.920159 | 64.9243 | 51.29080 | 2.871218 | 1.632895 | 328.1449 | 338.1785 |
Shuying originally applied an adjustment, so that's what we do below. First we need to correct the T1 hippocampal volumes for TIV. The T1 volumes come from FreeSurfer, but the TIV comes from VBM. I am using Shuying's VBM TIV rather than the FreeSurfer one because the FreeSurfer metrics are different, and I'm not sure the function/math Shuying used previously works with the FreeSurfer metric.
# v contains the adjusted hippocampal volumes
# 1. Create a function to apply to the volume columns (divides by the VBM TIV)
dividebyTIV <- function(x, na.rm = FALSE) (x / midlife_HP_df$t1_vbm_tiv)
# 2. Correct by mutating the columns, using the TIV from VBM
midlife_HP_df_adj <- midlife_HP_df %>%
  mutate_at(vars(avg_t1_hipp, `Left-Hippocampus`, `Right-Hippocampus`), dividebyTIV) %>%
  # Multiply to express the proportions as percentages of TIV
  mutate(avg_t1_hipp = avg_t1_hipp * 100,
         `Left-Hippocampus` = `Left-Hippocampus` * 100,
         `Right-Hippocampus` = `Right-Hippocampus` * 100)
Now that the volumes have been adjusted, we can run the correlations.
**Position Error**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "avg_t1_hipp", y = "loop_pe_avg_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at average (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "avg_t1_hipp", y = "loop_pe_rad3_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at 3.0 (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Angular Error**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "avg_t1_hipp", y = "loop_ae_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Angular Error at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "avg_t1_hipp", y = "loop_ae_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Angular Error at 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Degrees Traveled**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "avg_t1_hipp", y = "loop_de_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Degrees Traveled at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "avg_t1_hipp", y = "loop_de_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Degrees Traveled at 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Position Error**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Left-Hippocampus", y = "loop_pe_avg_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Left Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at average (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Left-Hippocampus", y = "loop_pe_rad3_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Left Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at 3.0 (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Angular Error**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Left-Hippocampus", y = "loop_ae_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Left Hippocampal GMV (TIV-adjusted)", ylab = "Angular Error at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Left-Hippocampus", y = "loop_ae_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Left Hippocampal GMV (TIV-adjusted)", ylab = "Angular Error at 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Degrees Traveled**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Left-Hippocampus", y = "loop_de_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Left Hippocampal GMV (TIV-adjusted)", ylab = "Degrees Traveled at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Left-Hippocampus", y = "loop_de_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Left Hippocampal GMV (TIV-adjusted)", ylab = "Degrees Traveled at 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Position Error**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Right-Hippocampus", y = "loop_pe_avg_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Right Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at average (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Right-Hippocampus", y = "loop_pe_rad3_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Right Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at 3.0 (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Angular Error**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Right-Hippocampus", y = "loop_ae_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Right Hippocampal GMV (TIV-adjusted)", ylab = "Angular Error at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Right-Hippocampus", y = "loop_ae_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Right Hippocampal GMV (TIV-adjusted)", ylab = "Angular Error at 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Degrees Traveled**
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Right-Hippocampus", y = "loop_de_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Right Hippocampal GMV (TIV-adjusted)", ylab = "Degrees Traveled at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
# Use hp data frame adjusted
ggscatter(midlife_HP_df_adj, x = "Right-Hippocampus", y = "loop_de_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Right Hippocampal GMV (TIV-adjusted)", ylab = "Degrees Traveled at 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 4 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 4 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 4 rows containing missing values (`geom_point()`).
**Position Error** - Position error at average is not significantly associated with T1 total hippocampus (r = -0.10, p = 0.53).
# Use the unadjusted data frame; TIV (eTIV) is partialled out instead
pcor.test(midlife_HP_df$loop_pe_avg_m, midlife_HP_df$avg_t1_hipp, midlife_HP_df$eTIV)
## estimate p.value statistic n gp Method
## 1 -0.09949911 0.5307073 -0.6324259 43 1 pearson
# Use the unadjusted data frame; TIV (eTIV) is partialled out instead
midlife_hp_rad3 <- midlife_HP_df %>% filter(!is.na(loop_pe_rad3_m))
pcor.test(midlife_hp_rad3$loop_pe_rad3_m, midlife_hp_rad3$avg_t1_hipp, midlife_hp_rad3$eTIV)
## estimate p.value statistic n gp Method
## 1 -0.1429717 0.391828 -0.8667343 39 1 pearson
**Angular Error**
# Use the unadjusted data frame; TIV (eTIV) is partialled out instead
pcor.test(midlife_HP_df$loop_ae_avg_degree, midlife_HP_df$avg_t1_hipp, midlife_HP_df$eTIV)
## estimate p.value statistic n gp Method
## 1 -0.0569105 0.7203568 -0.3605179 43 1 pearson
# Use the unadjusted data frame; TIV (eTIV) is partialled out instead
midlife_hp_rad3 <- midlife_HP_df %>% filter(!is.na(loop_ae_rad3_degree))
pcor.test(midlife_hp_rad3$loop_ae_rad3_degree, midlife_hp_rad3$avg_t1_hipp, midlife_hp_rad3$eTIV)
## estimate p.value statistic n gp Method
## 1 -0.1494153 0.3706105 -0.9066698 39 1 pearson
**Degrees Traveled**
# Use the unadjusted data frame; TIV (eTIV) is partialled out instead
pcor.test(midlife_HP_df$loop_de_avg_degree, midlife_HP_df$avg_t1_hipp, midlife_HP_df$eTIV)
## estimate p.value statistic n gp Method
## 1 -0.1479331 0.3498185 -0.9460198 43 1 pearson
- Degrees traveled at 3.0 is not significantly associated with T1 total hippocampus (r = -0.16, p = 0.34).
# Use the unadjusted data frame; TIV (eTIV) is partialled out instead
midlife_hp_rad3 <- midlife_HP_df %>% filter(!is.na(loop_de_rad3_degree))
pcor.test(midlife_hp_rad3$loop_de_rad3_degree, midlife_hp_rad3$avg_t1_hipp, midlife_hp_rad3$eTIV)
## estimate p.value statistic n gp Method
## 1 -0.1583098 0.3424721 -0.9619898 39 1 pearson
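As a sanity check on the `pcor.test` estimates above: the first-order partial correlation equals the Pearson correlation of the residuals after regressing each variable on the covariate (TIV here). The helper below is an illustrative sketch, not part of the original pipeline:

```r
# First-order partial correlation of x and y controlling for z,
# computed by correlating the residuals of x ~ z and y ~ z.
partial_r <- function(x, y, z) {
  cor(resid(lm(x ~ z)), resid(lm(y ~ z)))
}
```

For example, `partial_r(midlife_HP_df$loop_pe_avg_m, midlife_HP_df$avg_t1_hipp, midlife_HP_df$eTIV)` should reproduce the `estimate` column reported by `pcor.test`.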
For the young adults' hippocampal volumes, we will use FreeSurfer and VBM. We need to put things on the same scale as the midlife data.
Total N = 31
# Let's create a clean df to work with here and include only the columns we want
young_HP_df <-
young_raw_df %>% dplyr::select(
"subject_id",
"sex",
"age_spatial_years",
"loop_pe_rad3_m",
"loop_pe_avg_m",
"loop_de_rad3_degree",
"loop_de_avg_degree",
"loop_ae_rad3_degree",
"loop_ae_avg_degree",
"Left-Hippocampus",
"Right-Hippocampus",
"eTIV",
"VaisTIV_VBM"
  ) %>%
  mutate(avg_t1_hipp = (`Left-Hippocampus` + `Right-Hippocampus`) / 2) %>%
  filter(!is.na(eTIV)) # Remove subjects without a scan; N = 31
young_HP_female_df <- young_HP_df %>% filter(sex=="Female")
young_HP_male_df <- young_HP_df %>% filter(sex== "Male")
knitr::kable(normality_young_HP) %>% kable_styling(bootstrap_options = c("striped", "hover", "condensed")) %>% scroll_box(width = "800px", height = "300px")
| statistic | pvalue | method | variable |
|---|---|---|---|
| 0.891 | 0.041 | Shapiro-Wilk normality test | young_HP_df$loop_pe_rad3_m |
| 0.918 | 0.411 | Shapiro-Wilk normality test | young_HP_female_df$loop_pe_rad3_m |
| 0.820 | 0.025 | Shapiro-Wilk normality test | young_HP_male_df$loop_pe_rad3_m |
| 0.925 | 0.032 | Shapiro-Wilk normality test | young_HP_df$loop_pe_avg_m |
| 0.942 | 0.548 | Shapiro-Wilk normality test | young_HP_female_df$loop_pe_avg_m |
| 0.847 | 0.005 | Shapiro-Wilk normality test | young_HP_male_df$loop_pe_avg_m |
| 0.944 | 0.344 | Shapiro-Wilk normality test | young_HP_df$loop_de_rad3_degree |
| 0.939 | 0.597 | Shapiro-Wilk normality test | young_HP_female_df$loop_de_rad3_degree |
| 0.880 | 0.131 | Shapiro-Wilk normality test | young_HP_male_df$loop_de_rad3_degree |
| 0.955 | 0.216 | Shapiro-Wilk normality test | young_HP_df$loop_de_avg_degree |
| 0.920 | 0.318 | Shapiro-Wilk normality test | young_HP_female_df$loop_de_avg_degree |
| 0.924 | 0.119 | Shapiro-Wilk normality test | young_HP_male_df$loop_de_avg_degree |
| 0.829 | 0.004 | Shapiro-Wilk normality test | young_HP_df$loop_ae_rad3_degree |
| 0.869 | 0.148 | Shapiro-Wilk normality test | young_HP_female_df$loop_ae_rad3_degree |
| 0.743 | 0.003 | Shapiro-Wilk normality test | young_HP_male_df$loop_ae_rad3_degree |
| 0.891 | 0.004 | Shapiro-Wilk normality test | young_HP_df$loop_ae_avg_degree |
| 0.918 | 0.304 | Shapiro-Wilk normality test | young_HP_female_df$loop_ae_avg_degree |
| 0.825 | 0.002 | Shapiro-Wilk normality test | young_HP_male_df$loop_ae_avg_degree |
young_loop_summarystats <- young_HP_df %>%
group_by(sex) %>%
summarize(n_subject = n(),
age_mean = mean(age_spatial_years),
age_Sd = sd(age_spatial_years),
AE_rad3 = mean(loop_ae_rad3_degree,na.rm=TRUE),
AE_avg = mean(loop_ae_avg_degree ,na.rm=TRUE),
PE_rad3 = mean(loop_pe_rad3_m,na.rm=TRUE),
PE_avg = mean(loop_pe_avg_m,na.rm=TRUE),
DT_rad3 = mean(loop_de_rad3_degree,na.rm=TRUE),
DT_avg = mean(loop_de_avg_degree,na.rm=TRUE)) %>% as.data.frame()
knitr::kable(young_loop_summarystats) %>% kable_styling(bootstrap_options = c("striped", "hover", "condensed")) %>% scroll_box(width = "800px", height = "200px")
| sex | n_subject | age_mean | age_Sd | AE_rad3 | AE_avg | PE_rad3 | PE_avg | DT_rad3 | DT_avg |
|---|---|---|---|---|---|---|---|---|---|
| Female | 11 | 20.81818 | 2.561959 | 63.95039 | 52.86314 | 2.851491 | 1.589096 | 362.7044 | 368.8774 |
| Male | 20 | 20.25000 | 2.531382 | 54.86657 | 44.32274 | 2.487910 | 1.273323 | 352.4536 | 364.8809 |
Shuying originally applied an adjustment, so that's what we do below. First we need to correct the T1 hippocampal volumes for TIV. The T1 volumes come from FreeSurfer's recon-all, while the TIV used below is the VBM-derived TIV, rescaled to match the midlife data.
# Okay so we need to do a bit of changing here by bringing our VBM to scale with midlife
young_HP_df_adj <- young_HP_df %>% mutate(VaisTIV_VBM = VaisTIV_VBM*1000)
# v contains the adjusted hippocampal volumes
# 1. Create a function to apply to the volume columns (divides by the VBM TIV)
Young_dividebyTIV <- function(x, na.rm = FALSE) (x / young_HP_df_adj$VaisTIV_VBM)
# 2. Correct by mutating the columns, using the TIV from VBM
young_HP_df_adj <- young_HP_df_adj %>%
  mutate_at(vars(avg_t1_hipp, `Left-Hippocampus`, `Right-Hippocampus`), Young_dividebyTIV) %>%
  # Rescale the proportions to be comparable with the midlife values
  mutate(avg_t1_hipp = avg_t1_hipp * 1000,
         `Left-Hippocampus` = `Left-Hippocampus` * 1000,
         `Right-Hippocampus` = `Right-Hippocampus` * 1000)
**Position Error**
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "avg_t1_hipp", y = "loop_pe_avg_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at average (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "avg_t1_hipp", y = "loop_pe_rad3_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
            xlab = "Averaged Hippocampal GMV (TIV-adjusted)", ylab = "Position Error at 3.0 (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Angular Error **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "avg_t1_hipp", y = "loop_ae_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Average T1 Hippocampal Volume (TIV-adjusted)", ylab = "Angular Error at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "avg_t1_hipp", y = "loop_ae_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Average T1 Hippocampal Volume (TIV-adjusted)", ylab = "Angular Error at radius 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Degrees Traveled **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "avg_t1_hipp", y = "loop_de_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Average T1 Hippocampal Volume (TIV-adjusted)", ylab = "Degrees Traveled at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "avg_t1_hipp", y = "loop_de_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Average T1 Hippocampal Volume (TIV-adjusted)", ylab = "Degrees Traveled at radius 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Position Error **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Right-Hippocampus", y = "loop_pe_avg_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Right Hippocampal Volume (TIV-adjusted)", ylab = "Position Error at average (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Right-Hippocampus", y = "loop_pe_rad3_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Right Hippocampal Volume (TIV-adjusted)", ylab = "Position Error at radius 3.0 (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Angular Error **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Right-Hippocampus", y = "loop_ae_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Right Hippocampal Volume (TIV-adjusted)", ylab = "Angular Error at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
- Not normally distributed, so a Spearman correlation is used.
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Right-Hippocampus", y = "loop_ae_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Right Hippocampal Volume (TIV-adjusted)", ylab = "Angular Error at radius 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Degrees Traveled **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Right-Hippocampus", y = "loop_de_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Right Hippocampal Volume (TIV-adjusted)", ylab = "Degrees Traveled at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Right-Hippocampus", y = "loop_de_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Right Hippocampal Volume (TIV-adjusted)", ylab = "Degrees Traveled at radius 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Position Error **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Left-Hippocampus", y = "loop_pe_avg_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Left Hippocampal Volume (TIV-adjusted)", ylab = "Position Error at average (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Left-Hippocampus", y = "loop_pe_rad3_m",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Left Hippocampal Volume (TIV-adjusted)", ylab = "Position Error at radius 3.0 (m)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Angular Error **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Left-Hippocampus", y = "loop_ae_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Left Hippocampal Volume (TIV-adjusted)", ylab = "Angular Error at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
- Not normally distributed, so a Spearman correlation is used.
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Left-Hippocampus", y = "loop_ae_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "spearman",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Left Hippocampal Volume (TIV-adjusted)", ylab = "Angular Error at radius 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).
** Degrees Traveled **
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Left-Hippocampus", y = "loop_de_avg_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Left Hippocampal Volume (TIV-adjusted)", ylab = "Degrees Traveled at average (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 1 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 1 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 1 rows containing missing values (`geom_point()`).
# Use hp data frame adjusted
ggscatter(young_HP_df_adj, x = "Left-Hippocampus", y = "loop_de_rad3_degree",
add = "reg.line",
add.params = list(color = "black", fill = "lightgray"), # Customize reg. line
cor.coef = TRUE, # Add correlation coefficient. see ?stat_cor
conf.int = TRUE,
cor.method = "pearson",
cor.coeff.args = list(label.sep = "\n"),
xlab = "Left Hippocampal Volume (TIV-adjusted)", ylab = "Degrees Traveled at radius 3.0 (degrees)") +
theme(legend.position = "top", legend.title=element_blank())
## `geom_smooth()` using formula = 'y ~ x'
## Warning: Removed 14 rows containing non-finite values (`stat_smooth()`).
## Warning: Removed 14 rows containing non-finite values (`stat_cor()`).
## Warning: Removed 14 rows containing missing values (`geom_point()`).